12 research outputs found

    OpenTox predictive toxicology framework: toxicological ontology and semantic media wiki-based OpenToxipedia

    Get PDF
    Abstract

    Background

    The OpenTox Framework, developed by the partners in the OpenTox project (http://www.opentox.org), aims at providing unified access to toxicity data, predictive models and validation procedures. Interoperability of resources is achieved using a common information model, based on the OpenTox ontologies, describing predictive algorithms, models and toxicity data. As toxicological data may come from different, heterogeneous sources, a deployed ontology, unifying the terminology and the resources, is critical for the rational and reliable organization of the data and its automatic processing.

    Results

    The following related ontologies have been developed for OpenTox: a) Toxicological ontology – listing the toxicological endpoints; b) Organs system and Effects ontology – addressing organs, targets/examinations and effects observed in in vivo studies; c) ToxML ontology – representing a semi-automatic conversion of the ToxML schema; d) OpenTox ontology – representing OpenTox framework components: chemical compounds, datasets, types of algorithms, models and validation web services; e) ToxLink – ToxCast assays ontology; and f) OpenToxipedia, a community knowledge resource on toxicology terminology.

    OpenTox components are made available through standardized REST web services, where every compound, dataset and predictive method has a unique resolvable address (URI), used to retrieve its Resource Description Framework (RDF) representation or to initiate the associated calculations and generate new RDF-based resources.

    The services support the integration of toxicity and chemical data from various sources, the generation and validation of computer models for toxic effects, and the seamless integration of new algorithms and scientifically sound validation routines, and provide a flexible framework that allows building an arbitrary number of applications tailored to solving different problems by end users (e.g. toxicologists).

    Availability

    The OpenTox toxicological ontology projects may be accessed via the OpenTox ontology development page http://www.opentox.org/dev/ontology; the OpenTox ontology is available as OWL at http://opentox.org/api/1.1/opentox.owl; the ToxML–OWL conversion utility is an open source resource available at http://ambit.svn.sourceforge.net/viewvc/ambit/branches/toxml-utils/
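The URI/RDF retrieval described above can be sketched in a few lines of Ruby; the compound URI below is a hypothetical placeholder, not a real OpenTox service address.

```ruby
require "net/http"
require "uri"

# Hypothetical OpenTox compound URI (illustrative only; real services
# expose their own resolvable addresses).
compound_uri = URI("http://example.org/opentox/compound/1")

# OpenTox resources are retrieved by content negotiation: requesting
# the RDF/XML media type yields the RDF representation of the resource.
request = Net::HTTP::Get.new(compound_uri)
request["Accept"] = "application/rdf+xml"

# Sending the request (omitted here) would look like:
#   Net::HTTP.start(compound_uri.host, compound_uri.port) { |http| http.request(request) }
```

The same pattern applies to datasets, algorithms and models: one resolvable URI per resource, with the media type selecting the representation.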

    Collaborative development of predictive toxicology applications

    Get PDF
    OpenTox provides an interoperable, standards-based Framework for the support of predictive toxicology data management, algorithms, modelling, validation and reporting. It is relevant to satisfying the chemical safety assessment requirements of the REACH legislation, as it supports access to experimental data, (Quantitative) Structure-Activity Relationship models, and toxicological information through an integrating platform that adheres to regulatory requirements and OECD validation principles. Initial research defined the essential components of the Framework, including the approach to data access, schema and management, the use of controlled vocabularies and ontologies, the architecture, web service and communications protocols, and the selection and integration of algorithms for predictive modelling. OpenTox provides end-user oriented tools to non-computational specialists, risk assessors, and toxicological experts, in addition to Application Programming Interfaces (APIs) for developers of new applications. OpenTox actively supports public standards for data representation, interfaces, vocabularies and ontologies, Open Source approaches to core platform components, and community-based collaboration approaches, so as to advance system interoperability goals.

    Nano-Lazar: Read across Predictions for Nanoparticle Toxicities with Calculated and Measured Properties

    No full text
    The lazar framework for read-across predictions was expanded for the prediction of nanoparticle toxicities, and a new methodology for calculating nanoparticle descriptors from core and coating structures was implemented. Nano-lazar provides a flexible and reproducible framework for downloading data and ontologies from the open eNanoMapper infrastructure and for developing and validating nanoparticle read-across models, together with open-source code and a free graphical interface for nanoparticle read-across predictions. In this study we compare different nanoparticle descriptor sets and local regression algorithms. Sixty independent cross-validation experiments were performed for the Net Cell Association endpoint of the Protein Corona dataset. The best RMSE and r2 results originated from models with protein corona descriptors and the weighted random forest algorithm, but their 95% prediction interval is significantly less accurate than for models with simpler descriptor sets (measured and calculated nanoparticle properties). The most accurate prediction intervals were obtained with measured nanoparticle properties (no statistically significant difference (p < 0.05) of RMSE and r2 values compared to protein corona descriptors). Calculated descriptors are interesting for cheap and fast high-throughput screening purposes. RMSE and prediction intervals of random forest models are comparable to protein corona models, but r2 values are significantly lower.
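For reference, the two statistics used to compare these models can be computed as follows; this is a minimal sketch of the standard definitions, not the nano-lazar implementation.

```ruby
# Root mean squared error between measured and predicted values.
def rmse(measured, predicted)
  n = measured.size.to_f
  Math.sqrt(measured.zip(predicted).map { |m, p| (m - p)**2 }.reduce(:+) / n)
end

# Coefficient of determination (r2): fraction of variance explained.
def r_squared(measured, predicted)
  mean = measured.reduce(:+) / measured.size.to_f
  ss_res = measured.zip(predicted).map { |m, p| (m - p)**2 }.reduce(:+)
  ss_tot = measured.map { |m| (m - mean)**2 }.reduce(:+)
  1.0 - ss_res / ss_tot
end
```

A perfect model gives rmse = 0 and r2 = 1; comparing either statistic across the sixty cross-validation runs is what the paragraph above summarizes.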

    qsar-report ruby gem library

    No full text
    QMRF and QPRF reporting extension to OpenTox ruby modules and lazar. The QSAR-report gem was developed to extend the lazar and nano-lazar toxicity prediction applications with QMRF and QPRF reporting features. The library gem is independent from lazar and nano-lazar and can also be used in any other ruby code.

    About:

    Classes for QMRF and QPRF reporting.

    - OpenTox::QMRFReport: Provides a ruby OpenTox class to prepare an initial version of a QMRF report. The XML output is in QMRF version 1.3 and can be finalized with the QMRF editor 2.0 (https://sourceforge.net/projects/qmrf/).
    - OpenTox::QPRFReport: Provides a ruby OpenTox class to prepare an initial version of a QPRF (version 1.1) report in JSON or HTML.

    Usage:

    QMRF

    Create a new QMRF report, add some content and show the output:

        require "qsar-report"

        # create a new report
        report = OpenTox::QMRFReport.new

        # add a title
        report.value "QSAR_title", "My QSAR Title"

        # change 6.2 'Available information for the training set': set inchi and smiles to Yes
        report.change_attributes "training_set_data", {:inchi => "Yes", :smiles => "Yes"}

        # add a publication to the publication catalog
        report.change_catalog :publications_catalog, :publications_catalog_1, {:title => "MyName M (2016) My Publication Title, QSAR News, 10, 14-22", :url => "http://myqsarnewsmag.dom"}

        # link/reference the publication in the report bibliography
        report.ref_catalog :bibliography, :publications_catalog, :publications_catalog_1

        # output
        puts report.to_xml

        # validate a report (as created above) against qmrf.xsd
        report.validate

    QPRF

    Create a new QPRF report, add some content and show the output:

        require "qsar-report"

        # create a new QPRF report instance
        report = OpenTox::QPRFReport.new

        # set the title of the report
        report.Title = "My QPRF Report"

        # set the version
        report.Version = "1"

        # set the date
        report.Date = Time.now.strftime("%Y/%m/%d")

        # set the CAS number in chapter 1.1
        report.value "1.1", "7732-18-5" # CAS number for H2O

        # print HTML version
        puts report.to_html

        # print formatted JSON version
        puts report.pretty_json

    Installation

        gem install qsar-report

    Documentation:

    - http://www.rubydoc.info/gems/qsar-report (RubyDoc.info code documentation)
    - For full information on QSAR reporting see https://eurl-ecvam.jrc.ec.europa.eu/databases/jrc-qsar-model-database-and-qsar-model-reporting-formats (JRC QSAR Model Database and QSAR Model Reporting Formats)

    nano-lazar

    No full text
    Graphical user interface for nano-lazar read-across models. Users can predict nanoparticle toxicities by entering (i) core and coating compounds, (ii) nanoparticle properties or (iii) interactions with human serum proteins.

    To our knowledge this is the first program that predicts nanoparticle toxicities from computed properties alone. This makes model (i) especially well suited for cheap and fast nanoparticle toxicity assessments. A detailed description of methods and validation results can be found at https://github.com/enanomapper/nano-lazar-paper/blob/master/nano-lazar.pdf.

    lazar is a modular framework for read-across predictions of chemical toxicities. Within the EU FP7 eNanoMapper project lazar was extended with capabilities to handle nanomaterial data, interfaces to other eNanoMapper services (databases and ontologies) and a stable and user-friendly graphical interface for nanoparticle read-across predictions. nano-lazar is the graphical interface to these nanoparticle read-across predictions.

    lazar-rest

    No full text
    REST API web service for lazar and nano-lazar. lazar (lazy structure–activity relationships) is a modular framework for read-across predictions of chemical toxicities. Within the European Union's FP7 eNanoMapper project lazar was extended with capabilities to handle nanomaterial data, interfaces to other eNanoMapper services (databases and ontologies) and a stable and user-friendly graphical interface for nanoparticle read-across predictions. lazar-rest provides a new RESTful web service to these developments.
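A client call against such a service might be sketched as follows; the endpoint path and parameter name are assumptions for illustration, not the documented lazar-rest API.

```ruby
require "net/http"
require "uri"

# Hypothetical prediction endpoint (illustrative only).
endpoint = URI("http://example.org/lazar/prediction")

# A prediction request would submit a compound identifier (here a SMILES
# string) as form data and ask for a machine-readable response.
request = Net::HTTP::Post.new(endpoint)
request.set_form_data("identifier" => "CC(=O)Oc1ccccc1C(=O)O") # aspirin SMILES
request["Accept"] = "application/json"

# Sending the request (omitted here) would look like:
#   Net::HTTP.start(endpoint.host, endpoint.port) { |http| http.request(request) }
```

The REST style keeps the service language-agnostic: any HTTP client, not only ruby, can drive lazar and nano-lazar predictions this way.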

    Deliverable Report D3.3 Modules and services for linking and integration with third party databases

    No full text
    Understanding the biological effects of nanomaterials requires at least insight into their physicochemical identity; recent research has, however, shown how important the biological identity is in fully understanding the biological mechanisms. This in turn requires interlinking nanomaterial databases with databases from other domains. This deliverable reports on our efforts, outlined in Tasks 3.4 and 3.6, to apply the Linked Data ideas to data in the nanosafety community, taking into account recent guidance and experimenting with a number of technical solutions for linking data. We report on work that led to Resource Description Framework (RDF) support in the database, reusing the eNanoMapper ontology, and interlinking with other databases. We show how the RDF can be used and demonstrate its applicability with a few examples. The related deliverable D5.6, to follow, is about data completeness and is also based on the output of this work.
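As a minimal illustration of the Linked Data idea, a single nanomaterial fact can be serialized as one RDF statement in N-Triples form; the subject URI and the ontology term identifier below are hypothetical placeholders, not actual eNanoMapper identifiers.

```ruby
# Hypothetical subject and predicate URIs (illustrative placeholders only).
subject   = "<http://example.org/nanomaterial/NM-101>"
predicate = "<http://example.org/onto/ENM_0000000>" # assumed ontology term id
object    = "\"titanium dioxide\""

# N-Triples: one "subject predicate object ." statement per line.
# Because subjects and predicates are shared URIs, statements from a
# nanomaterial database can be merged with statements from other domains.
triple = "#{subject} #{predicate} #{object} ."
```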

    Deliverable Report D6.2 eNanoMapper Year 2 Dissemination Report

    No full text
    The key objectives of WP6 are to disseminate and raise awareness of the scientific results, tools and applications developed in the eNanoMapper project among the user communities in academia and industry, and to provide training on these eNanoMapper tools through online seminars and other training events. In the second year of the project, the project partners produced additional online seminars (webinars) on selected topics as well as tutorials on tools developed in other work packages; we also took part in several scientific meetings with presentations on eNanoMapper, and co-organized workshops and conferences including CompNanoTox 2015 and OpenTox 2015. Important scientific papers were also published, including a contribution on the eNanoMapper database for nanomaterial safety information to a thematic issue on “Nanoinformatics for Environmental Health and Biomedicine”. Additionally, the eNanoMapper project is well represented in the EU Nanosafety Cluster, as WG4 (databases) is chaired by Egon Willighagen (UM) and WG8 (systems biology) is chaired by Bengt Fadeel (KI).